In [ ]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

Text Feature Extraction with Bag-of-Words

In many tasks, like the classic example of spam detection, your input data is text. Free text with variable length is very far from the fixed-length numeric representation that we need to do machine learning with scikit-learn. However, there is an easy and effective way to go from text data to a numeric representation that we can use with our models, called bag-of-words.

Let's assume that each sample in your dataset is represented as one string, which could be just a sentence, an email, or a whole news article or book. To represent the sample, we first split the string into a list of tokens, which correspond to (somewhat normalized) words. A simple way to do this is to split on whitespace and then lowercase each word. Then we build a vocabulary of all tokens (lowercased words) that appear in our whole dataset. This is usually a very large vocabulary. Finally, looking at a single sample, we count how often each word in the vocabulary appears in it. We represent the string by a vector, where each entry is the number of times a given word in the vocabulary appears in the string.

As each sample will only contain very few of the words, most entries will be zero, leading to a very high-dimensional but sparse representation.

The method is called bag-of-words as the order of the words is lost entirely.
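
As a minimal sketch of this procedure, assuming a plain whitespace split (the scikit-learn tokenizer used below is a bit smarter and also strips punctuation), the vocabulary and count vectors can be built by hand:

In [ ]:
# hand-rolled bag-of-words with a plain whitespace tokenizer; note that
# punctuation stays attached to words, unlike with CountVectorizer below
docs = ["Some say the world will end in fire,",
        "Some say in ice."]

tokenized = [doc.lower().split() for doc in docs]
vocabulary = sorted(set(token for tokens in tokenized for token in tokens))
counts = [[tokens.count(word) for word in vocabulary] for tokens in tokenized]

print(vocabulary)
print(counts)

The rest of this section does the same thing with scikit-learn's CountVectorizer.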


In [ ]:
X = ["Some say the world will end in fire,",
     "Some say in ice."]

In [ ]:
len(X)

In [ ]:
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer()
vectorizer.fit(X)

In [ ]:
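# the vocabulary_ attribute maps each token to its column index in the feature matrix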
vectorizer.vocabulary_

In [ ]:
X_bag_of_words = vectorizer.transform(X)

In [ ]:
X_bag_of_words.shape

In [ ]:
X_bag_of_words

In [ ]:
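# the bag-of-words representation is stored as a sparse matrix;
# toarray() converts it to a dense numpy array for inspection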
X_bag_of_words.toarray()

In [ ]:
vectorizer.get_feature_names_out()

In [ ]:
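# recover, for each document, the set of tokens with nonzero counts
# (the original word order is not recoverable)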
vectorizer.inverse_transform(X_bag_of_words)

tf-idf Encoding

A useful transformation that is often applied to the bag-of-words encoding is the so-called term frequency-inverse document frequency (tf-idf) scaling, which is a non-linear transformation of the word counts.

The tf-idf encoding downweights words that are common across many documents:


In [ ]:
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vectorizer = TfidfVectorizer()
tfidf_vectorizer.fit(X)

In [ ]:
import numpy as np
np.set_printoptions(precision=2)

print(tfidf_vectorizer.transform(X).toarray())
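
Under scikit-learn's default settings (smooth_idf=True, norm="l2"), each count is multiplied by idf(t) = ln((1 + n) / (1 + df(t))) + 1, where n is the number of documents and df(t) is the number of documents containing the term t, and each row is then scaled to unit Euclidean length. A TfidfVectorizer is equivalent to a CountVectorizer followed by a TfidfTransformer, so the values above can be reproduced from the raw counts:

In [ ]:
# reproduce the tf-idf values from the raw word counts; this sketch assumes
# scikit-learn's default settings (smooth_idf=True, sublinear_tf=False, norm="l2")
from sklearn.feature_extraction.text import TfidfTransformer

counts = CountVectorizer().fit_transform(X)
print(TfidfTransformer().fit_transform(counts).toarray())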

Bigrams and N-Grams

Entirely discarding word order is not always a good idea, as composite phrases often have specific meanings, and modifiers like "not" can invert the meaning of words. A simple way to include some word order is to use n-grams, which look not only at single tokens but also at all pairs (or longer runs) of neighboring tokens.
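
As a quick plain-Python illustration (separate from scikit-learn's implementation), the bigrams of a tokenized sentence are simply the pairs of adjacent tokens:

In [ ]:
# bigrams are pairs of neighboring tokens (plain-Python illustration)
tokens = "some say the world will end in fire".split()
list(zip(tokens, tokens[1:]))

CountVectorizer can extract these for the whole corpus via its ngram_range parameter: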


In [ ]:
# look at sequences of tokens of minimum length 2 and maximum length 2
bigram_vectorizer = CountVectorizer(ngram_range=(2, 2))
bigram_vectorizer.fit(X)

In [ ]:
bigram_vectorizer.get_feature_names_out()

In [ ]:
bigram_vectorizer.transform(X).toarray()

Often we want to include unigrams (single tokens) as well as bigrams:


In [ ]:
gram_vectorizer = CountVectorizer(ngram_range=(1, 2))
gram_vectorizer.fit(X)

In [ ]:
gram_vectorizer.get_feature_names_out()

In [ ]:
gram_vectorizer.transform(X).toarray()

Character n-grams

Sometimes it is also helpful not to look at words, but at single characters instead. That is particularly useful if you have very noisy data, want to identify the language, or want to predict something about a single word. We can simply look at characters instead of words by setting analyzer="char". Looking at single characters is usually not very informative, but looking at longer n-grams of characters can be.
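
For instance, the character 2-grams of a short string are just all pairs of adjacent characters, including spaces; a quick plain-Python sketch:

In [ ]:
# character 2-grams of a single string (plain-Python illustration)
s = "some say"
[s[i:i + 2] for i in range(len(s) - 1)]

The same idea, applied to the whole corpus with CountVectorizer: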


In [ ]:
char_vectorizer = CountVectorizer(ngram_range=(2, 2), analyzer="char")
char_vectorizer.fit(X)

In [ ]:
print(char_vectorizer.get_feature_names_out())

In [ ]: